Variational Methods for Stochastic Optimization
Authors
Abstract
In the study of graphical models, methods based on variational free-energy bounds have been widely used to approximate functionals of probability distributions. In this paper, we apply the same principles to problems of stochastic optimization; in particular, the resulting method follows the structure of the generalized EM algorithm. We show that it generalizes a large class of stochastic optimization procedures, including the cross-entropy method, population-based incremental learning, genetic algorithms, and stochastic hill-climbing.
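To make the connection concrete: the cross-entropy method can be read as alternating an E-like step (sampling candidates from a search distribution and selecting the best) with an M-like step (refitting the distribution to the selected samples by maximum likelihood). The sketch below is a minimal Gaussian instance of this scheme; the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def cross_entropy_method(f, mu, sigma, n_samples=100, n_elite=10, n_iters=50):
    """Minimize f by alternating an E-like step (sample from the search
    distribution q and select the best candidates) with an M-like step
    (refit q to those elites by maximum likelihood)."""
    for _ in range(n_iters):
        # E-like step: draw candidates from q(x) = N(mu, diag(sigma^2))
        x = np.random.normal(mu, sigma, size=(n_samples, len(mu)))
        scores = np.array([f(xi) for xi in x])
        elite = x[np.argsort(scores)[:n_elite]]
        # M-like step: maximum-likelihood refit of q to the elites
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-8
    return mu

# usage: recovers the minimizer (3, 3) of a shifted sphere function
print(cross_entropy_method(lambda x: np.sum((x - 3.0) ** 2),
                           mu=np.zeros(2), sigma=np.ones(2)))
```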
Similar resources
Deterministic Annealing for Stochastic Variational Inference
Stochastic variational inference (SVI) maps posterior inference in latent variable models to nonconvex stochastic optimization. While variational inference methods enable approximate posterior inference for many otherwise intractable models, they suffer from local optima. We introduce deterministic annealing for SVI to overcome this issue. We introduce a temperature parameter that deterministic...
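One common form of deterministic annealing scales the entropy term of the variational objective by a temperature that is cooled toward 1. A minimal sketch of that idea, assuming a generic log-joint and entropy for the variational family; this is an illustration, not the authors' exact construction:

```python
def annealed_elbo(log_joint, q_sample, q_entropy, T):
    """One-sample Monte Carlo estimate of the tempered objective
    E_q[log p(x, z)] + T * H(q). For T > 1 the entropy term dominates
    and smooths out local optima; cooling T toward 1 recovers the
    ordinary ELBO."""
    z = q_sample()            # single draw from the variational family q
    return log_joint(z) + T * q_entropy()

def temperature(step, T0=10.0, decay=0.95):
    """Geometric cooling schedule, clipped at the untempered value T = 1."""
    return max(1.0, T0 * decay ** step)
```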
Stochastic Annealing for Variational Inference
We empirically evaluate a stochastic annealing strategy for Bayesian posterior optimization with variational inference. Variational inference is a deterministic approach to approximate posterior inference in Bayesian models in which a typically non-convex objective function is locally optimized over the parameters of the approximating distribution. We investigate an annealing method for optimiz...
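A generic way to realize such a strategy is to inject zero-mean noise, decayed over iterations, into the variational parameter updates. The sketch below is a hypothetical illustration of that pattern, not the schedule evaluated in the paper:

```python
import numpy as np

def noisy_ascent_step(params, grad, lr, step, noise0=0.1, decay=0.99,
                      rng=np.random.default_rng()):
    """One stochastic-annealing update: gradient ascent on the variational
    objective plus injected noise whose scale decays over iterations, so
    early steps can escape poor local optima while late steps settle."""
    noise = noise0 * decay ** step * rng.standard_normal(params.shape)
    return params + lr * grad + noise
```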
Gaussian variational approximation with a factor covariance structure
Variational approximation methods have proven to be useful for scaling Bayesian computations to large data sets and highly parametrized models. Applying variational methods involves solving an optimization problem, and recent research in this area has focused on stochastic gradient ascent methods as a general approach to implementation. Here variational approximation is considered for a posteri...
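The factor structure referred to here typically takes the covariance to be BBᵀ + D², with a tall, thin loading matrix B and diagonal D, which permits cheap reparameterized sampling. A minimal sketch, assuming that parameterization:

```python
import numpy as np

def sample_factor_gaussian(mu, B, d, rng=np.random.default_rng()):
    """Reparameterized draw from N(mu, B @ B.T + diag(d**2)), where B is
    a (dim x k) factor loading matrix with k << dim. Sampling and storage
    cost O(dim * k) rather than the O(dim^2) of a full covariance."""
    eps_factor = rng.standard_normal(B.shape[1])  # k common factor shocks
    eps_diag = rng.standard_normal(mu.shape[0])   # dim idiosyncratic terms
    return mu + B @ eps_factor + d * eps_diag
```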
Fast Second Order Stochastic Backpropagation for Variational Inference
We propose a second-order (Hessian or Hessian-free) optimization method for variational inference inspired by Gaussian backpropagation, and argue that quasi-Newton optimization can be developed as well. This is accomplished by generalizing the gradient computation in stochastic backpropagation via a reparametrization trick with lower complexity. As an illustrative example, we apply this a...
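The Gaussian backpropagation identities underlying this line of work (Bonnet's and Price's theorems) relate gradients of a Gaussian expectation to expected first and second derivatives of the integrand. A minimal Monte Carlo sketch for the univariate case, with illustrative names:

```python
import numpy as np

def gaussian_backprop(f_grad, f_hess, mu, sigma, n_samples=10_000,
                      rng=np.random.default_rng()):
    """Monte Carlo estimates of Bonnet's and Price's identities for
    z ~ N(mu, sigma^2): the gradient of E[f(z)] w.r.t. mu equals
    E[f'(z)], and w.r.t. the variance equals 0.5 * E[f''(z)].
    Uses the reparameterization z = mu + sigma * eps."""
    z = mu + sigma * rng.standard_normal(n_samples)
    return f_grad(z).mean(), 0.5 * f_hess(z).mean()

# usage: for f(z) = z**2, E[f] = mu**2 + sigma**2, so the exact answers
# are 2 * mu = 2.0 (w.r.t. mu) and 1.0 (w.r.t. the variance)
print(gaussian_backprop(lambda z: 2 * z, lambda z: 2 + 0 * z,
                        mu=1.0, sigma=0.5))
```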
Stochastic variational inference for hidden Markov models
Variational inference algorithms have proven successful for Bayesian analysis in large data settings, with recent advances using stochastic variational inference (SVI). However, such methods have largely been studied in independent or exchangeable data settings. We develop an SVI algorithm to learn the parameters of hidden Markov models (HMMs) in a time-dependent data setting. The challenge in ...
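Because of the time dependence, minibatches cannot be single observations; one standard device is to subsample contiguous subchains and pad them with buffer observations so that local forward messages approximate their full-chain values. A rough sketch of a buffered forward recursion under assumed names and shapes, not the authors' full algorithm:

```python
import numpy as np

def buffered_forward(lik, pi0, A, L, buf, rng=np.random.default_rng()):
    """Forward recursion over a randomly chosen subchain of length L,
    padded with `buf` extra observations on each side so the forward
    messages inside the subchain are close to their full-chain values.
    lik[t, k] = p(obs_t | state k); A is the K x K transition matrix;
    pi0 is the initial state distribution."""
    T, K = lik.shape
    start = rng.integers(buf, T - L - buf + 1)   # subchain start index
    lo, hi = start - buf, start + L + buf        # buffered window
    alpha = pi0 * lik[lo]
    alpha /= alpha.sum()
    messages = [alpha]
    for t in range(lo + 1, hi):
        alpha = (alpha @ A) * lik[t]             # predict, then condition
        alpha /= alpha.sum()                     # normalize for stability
        messages.append(alpha)
    return np.array(messages)[buf:buf + L]       # discard the buffer part
```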